On maximization of the information divergence from an exponential family

Author

  • Nihat Ay
Abstract

The information divergence of a probability measure P from an exponential family E over a finite set is defined as the infimum of the divergences of P from Q subject to Q in E. For convex exponential families, the local maximizers of this function of P are found. A general exponential family E of dimension d is enlarged to an exponential family of dimension at most 3d + 2 such that the local maximizers are of zero divergence from the enlarged family.
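In standard notation (assumed here for illustration, not taken from the paper's own notation), the quantity whose maximizers are studied is

D(P \,\|\, \mathcal{E}) = \inf_{Q \in \mathcal{E}} D(P \,\|\, Q),
\qquad
D(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},

the Kullback-Leibler divergence of P from its closest point in the family E.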


Similar Articles

Duality Between Maximization of Expected Utility and Minimization of Relative Entropy When Probabilities are Imprecise

In this paper we model the problem faced by a risk-averse decision maker with a precise subjective probability distribution who bets against a risk-neutral opponent or invests in a financial market, where the beliefs of the opponent or of the representative agent in the market are described by a convex set of imprecise probabilities. The problem of finding the portfolio of bets or investments that m...


Kullback-Leibler Aggregation and Misspecified Generalized Linear Models

In a regression setup with deterministic design, we study the pure aggregation problem and introduce a natural extension from the Gaussian distribution to distributions in the exponential family. While this extension bears strong connections with generalized linear models, it does not require identifiability of the parameter or even that the model on the systematic component is true. It is show...


A Representation Approach for Relative Entropy Minimization with Expectation Constraints

We consider the general problem of relative entropy minimization and entropy maximization subject to expectation constraints. We show that the solutions can be represented as members of an exponential family subject to weaker conditions than previously shown, and the representation can be simplified further if an appropriate conjugate prior density is used. As a result, the solutions can be fou...
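As a reminder of the classical shape of such representations (a standard result stated under simplifying assumptions, in notation not taken from the paper): minimizing the relative entropy D(Q \,\|\, P_0) over distributions Q satisfying expectation constraints \mathbb{E}_Q[f_i] = c_i yields a member of an exponential family,

Q^\ast(x) = \frac{P_0(x) \exp\big(\sum_i \lambda_i f_i(x)\big)}{Z(\lambda)},
\qquad
Z(\lambda) = \sum_x P_0(x) \exp\Big(\sum_i \lambda_i f_i(x)\Big),

where the Lagrange multipliers \lambda_i are chosen so that the constraints hold.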


Learning mixtures by simplifying kernel density estimators

Gaussian mixture models are a widespread tool for modeling varied and complex probability density functions. They can be estimated by various means, often using Expectation-Maximization or Kernel Density Estimation. In addition to these well-known algorithms, new and promising stochastic modeling methods include Dirichlet Process mixtures and k-Maximum Likelihood Estimators. Most of the method...
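For reference, the density such methods estimate is the standard Gaussian mixture (notation assumed here):

p(x) = \sum_{k=1}^{K} w_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad
w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1,

with component weights w_k, means \mu_k, and covariance matrices \Sigma_k.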


Bounds on approximate steepest descent for likelihood maximization in exponential families

An approximate steepest descent strategy is described that converges, in families of regular exponential densities, to maximum likelihood estimates of density functions. These density estimates are also obtained by an application of the principle of minimum relative entropy subject to empirical constraints. We prove tight bounds on the increase of the log-likelihood at each iteration of our strateg...
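As background on why (approximate) steepest descent is natural in this setting (standard facts about regular exponential families, in notation not taken from the paper): for densities p_\theta(x) = h(x) \exp(\langle \theta, f(x) \rangle - \psi(\theta)) and samples x_1, \dots, x_n, the gradient of the average log-likelihood is the gap between empirical and model moments,

\nabla_\theta \, \frac{1}{n} \sum_{j=1}^{n} \log p_\theta(x_j)
= \frac{1}{n} \sum_{j=1}^{n} f(x_j) - \mathbb{E}_{p_\theta}[f(X)],

so a steepest ascent step \theta \leftarrow \theta + \eta \big( \hat{\mu} - \mathbb{E}_{p_\theta}[f] \big) moves the model moments toward the empirical moments \hat{\mu}.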



Publication date: 2003